SAFE: Scale Aware Feature Encoder for Scene Text Recognition
In this paper, we address the problem of having characters with different
scales in scene text recognition. We propose a novel scale aware feature
encoder (SAFE) that is designed specifically for encoding characters with
different scales. SAFE is composed of a multi-scale convolutional encoder and a
scale attention network. The multi-scale convolutional encoder aims to extract
character features at multiple scales, and the scale attention network is
responsible for selecting features from the most relevant scale(s).
SAFE has two main advantages over the traditional single-CNN encoder used in
current state-of-the-art text recognizers. First, it explicitly tackles the
scale problem by extracting scale-invariant features from the characters. This
allows the recognizer to put more effort into handling other challenges in scene
text recognition, like those caused by view distortion and poor image quality.
Second, it can transfer the learning of feature encoding across different
character scales. This is particularly important when the training set has a
very unbalanced distribution of character scales, as training with such a
dataset will make the encoder biased towards extracting features from the
predominant scale. To evaluate the effectiveness of SAFE, we design a simple
text recognizer named scale-spatial attention network (S-SAN) that employs SAFE
as its feature encoder, and carry out experiments on six public benchmarks.
Experimental results demonstrate that S-SAN can achieve state-of-the-art (or,
in some cases, extremely competitive) performance without any post-processing.
Comment: ACCV 2018
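The scale attention network described above can be sketched as a softmax over per-scale relevance scores, applied pixel-wise to fuse the multi-scale feature maps. The function name, toy shapes, and use of raw logits below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def scale_attention(feature_maps, scores):
    """Fuse multi-scale feature maps with a softmax attention over scales.

    feature_maps: list of arrays of shape (C, H, W), one per scale
    scores: per-scale relevance logits of shape (num_scales, H, W)
    """
    # numerically stable softmax over the scale axis, per spatial location
    weights = np.exp(scores - scores.max(axis=0, keepdims=True))
    weights /= weights.sum(axis=0, keepdims=True)
    stacked = np.stack(feature_maps)                      # (S, C, H, W)
    # broadcast attention weights across channels and sum over scales
    return (weights[:, None, :, :] * stacked).sum(axis=0)  # (C, H, W)

# toy usage: two scales, with attention pushed almost entirely onto scale 2
f1 = np.zeros((4, 2, 2))
f2 = np.ones((4, 2, 2))
logits = np.stack([np.full((2, 2), -10.0), np.full((2, 2), 10.0)])
fused = scale_attention([f1, f2], logits)
```

With the logits above, the fused output is essentially the second scale's features, which is the selection behavior the abstract attributes to the scale attention network.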
Open-ended Learning in Symmetric Zero-sum Games
Zero-sum games such as chess and poker are, abstractly, functions that
evaluate pairs of agents, for example labeling them `winner' and `loser'. If
the game is approximately transitive, then self-play generates sequences of
agents of increasing strength. However, nontransitive games, such as
rock-paper-scissors, can exhibit strategic cycles, and there is no longer a
clear objective -- we want agents to increase in strength, but against whom is
unclear. In this paper, we introduce a geometric framework for formulating
agent objectives in zero-sum games, in order to construct adaptive sequences of
objectives that yield open-ended learning. The framework allows us to reason
about population performance in nontransitive games, and enables the
development of a new algorithm (rectified Nash response, PSRO_rN) that uses
game-theoretic niching to construct diverse populations of effective agents,
producing a stronger set of agents than existing algorithms. We apply PSRO_rN
to two highly nontransitive resource allocation games and find that PSRO_rN
consistently outperforms the existing alternatives.
Comment: ICML 2019, final version
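The "game-theoretic niching" in PSRO_rN rests on rectifying the population payoff matrix: each agent in the Nash support trains only against the opponents it beats, weighted by their Nash mass. A minimal sketch of that objective computation, assuming an antisymmetric payoff matrix and a precomputed Nash mixture (the function name is illustrative):

```python
import numpy as np

def rectified_nash_objectives(payoff, nash):
    """Per-agent training weights under a rectified Nash response.

    payoff: antisymmetric matrix, payoff[i, j] = margin of agent i vs agent j
    nash: Nash equilibrium mixture over the current population
    Negative entries are clipped so each agent amplifies its strengths
    (the opponents it already beats) rather than fixing its weaknesses.
    """
    rectified = np.maximum(payoff, 0.0)  # keep only positive interactions
    return rectified @ nash               # weight opponents by Nash mass

# rock-paper-scissors: the Nash mixture is uniform, every strategy beats
# exactly one opponent, so every agent gets the same rectified objective
A = np.array([[ 0.,  1., -1.],
              [-1.,  0.,  1.],
              [ 1., -1.,  0.]])
p = np.array([1/3, 1/3, 1/3])
obj = rectified_nash_objectives(A, p)
```

In the full algorithm these weights would drive a best-response step per agent; computing the Nash mixture itself (omitted here) requires a zero-sum game solver.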
Synthetically Supervised Feature Learning for Scene Text Recognition
We address the problem of image feature learning for scene text recognition. The image features in the state-of-the-art methods are learned from large-scale synthetic image datasets. However, most methods only rely on the outputs of the synthetic data generation process, namely realistic-looking images, and completely ignore the rest of the process. We propose to leverage the parameters that lead to the output images to improve image feature learning. Specifically, for every image out of the data generation process, we obtain the associated parameters and render another "clean" image that is free of select distortion factors that are applied to the output image. Because of the absence of distortion factors, the clean image tends to be easier to recognize than the original image and can therefore serve as supervision. We design a multi-task network with an encoder-discriminator-generator architecture to guide the feature of the original image toward that of the clean image. The experiments show that our method significantly outperforms the state-of-the-art methods on standard scene text recognition benchmarks in the lexicon-free category. Furthermore, we show that without explicit handling, our method works on challenging cases where input images contain severe geometric distortion, such as text on a curved path.
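The core supervision signal here is pulling the distorted image's encoding toward the clean image's encoding. A minimal sketch of such a feature-matching term, assuming plain L2 distance between the two encodings (the paper's full objective also includes discriminator and generator losses, omitted here):

```python
import numpy as np

def feature_match_loss(f_orig, f_clean):
    """Mean squared distance between the encodings of the distorted
    (original) image and its rendered distortion-free "clean" twin.
    Minimizing this guides the encoder toward distortion-invariant features.
    """
    return float(np.mean((f_orig - f_clean) ** 2))

# toy usage: identical encodings give zero loss, unit gap gives loss 1.0
f_clean = np.zeros((8,))
f_orig = np.ones((8,))
loss = feature_match_loss(f_orig, f_clean)
```

In training, only the original-image branch would receive gradients from this term; the clean-image encoding acts as the target.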
Mask TextSpotter: An End-to-End Trainable Neural Network for Spotting Text with Arbitrary Shapes
Recently, models based on deep neural networks have dominated the fields of
scene text detection and recognition. In this paper, we investigate the problem
of scene text spotting, which aims at simultaneous text detection and
recognition in natural images. An end-to-end trainable neural network model for
scene text spotting is proposed. The proposed model, named Mask TextSpotter,
is inspired by the newly published work Mask R-CNN. Different from previous
methods that also accomplish text spotting with end-to-end trainable deep
neural networks, Mask TextSpotter takes advantage of a simple and smooth
end-to-end learning procedure, in which precise text detection and recognition
are acquired via semantic segmentation. Moreover, it is superior to previous
methods in handling text instances of irregular shapes, for example, curved
text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the
proposed method achieves state-of-the-art results in both scene text detection
and end-to-end text recognition tasks.
Comment: To appear in ECCV 2018
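Recognition via semantic segmentation, as described above, amounts to predicting per-character-class masks inside a text region and reading them out in spatial order. A toy decoding sketch under that assumption (the function name, thresholding, and left-to-right centroid ordering are illustrative simplifications, and repeated characters are not handled):

```python
import numpy as np

def read_text_from_char_maps(char_maps, alphabet, threshold=0.5):
    """Decode a word from per-class character segmentation maps.

    char_maps: (num_classes, H, W) soft masks, one channel per character class
    Characters whose mask fires anywhere are ordered left-to-right by the
    mean column index (centroid) of their mask region.
    """
    hits = []
    for cls, m in enumerate(char_maps):
        mask = m > threshold
        if mask.any():
            xs = np.nonzero(mask)[1]          # column indices of the region
            hits.append((xs.mean(), alphabet[cls]))
    return "".join(ch for _, ch in sorted(hits))

# toy maps spelling "ab": class 'a' fires on the left, 'b' on the right
maps = np.zeros((2, 4, 8))
maps[0, :, 0:3] = 1.0   # 'a' occupies columns 0-2
maps[1, :, 5:8] = 1.0   # 'b' occupies columns 5-7
word = read_text_from_char_maps(maps, "ab")
```

Because the readout depends only on mask geometry rather than a left-to-right sliding decoder, this style of recognition extends naturally to curved and otherwise irregular text, which is the advantage the abstract highlights.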